posted 08-29-2008 01:53 PM
I had the opportunity to attend one of the best lectures I have heard at the APA, presented by Don Krapohl on Polygraph Principles. It really hit home. I will not repeat his entire lecture here, as I certainly would not do it justice, but the essence of his presentation (as I understood it) made it very clear to me that we engage in a number of "mystical procedures" and technical machinations that have absolutely no scientific basis or research support. Virtually all of these are wrapped up in our belief in the various technical questions that have been added to the comparison question test over the years to "fix" some perceived weakness in the basic CQT. Each one gets a special name, usually spawns an invented "scientific" term, and many produce a new variation of the CQT, which in turn gets a new surname.
I really can't do the subject justice here, but what I came away with from this well-presented, carefully footnoted lecture caused me to ask the following question: "Why do we keep shooting ourselves in the foot?" The available research, over many years, clearly shows that any CQT format requires only a few simple pieces to work properly. They are as follows:
1. An orienting or “introductory” question
2. Irrelevant questions
3. Comparison questions
4. Relevant questions
Such a test should probably begin and end with an irrelevant question, though even the final irrelevant doesn't really appear to matter. Each "spot" should consist of an irrelevant question, a comparison question, and a relevant question, with no more than 4 relevant questions in a single test. The comparison and/or relevant questions should be rotated on each subsequent test chart so that every comparison question is compared equally to every relevant question.
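For anyone who thinks better in concrete terms, here is a minimal sketch (in Python) of the structure described above. The question labels, the helper names, and the one-position-per-chart rotation scheme are my own illustration of the idea, not anything prescribed in the lecture:

```python
from collections import deque

def build_chart(intro, irrelevants, comparisons, relevants):
    """Assemble one chart: intro question, then I/C/R "spots",
    closed out with a final irrelevant question."""
    assert len(relevants) <= 4, "no more than 4 relevant questions per test"
    assert len(irrelevants) > len(relevants), "need a closing irrelevant"
    chart = [intro]
    for irr, cq, rq in zip(irrelevants, comparisons, relevants):
        chart += [irr, cq, rq]                 # one spot: I, C, R
    chart.append(irrelevants[len(relevants)])  # end on an irrelevant
    return chart

def run_charts(num_charts, intro, irrelevants, comparisons, relevants):
    """Rotate the comparison questions one position per chart so that,
    across the charts, each CQ is paired with each RQ equally often."""
    cqs = deque(comparisons)
    charts = []
    for _ in range(num_charts):
        charts.append(build_chart(intro, irrelevants, list(cqs), relevants))
        cqs.rotate(1)  # shift the CQs before the next chart
    return charts

for n, chart in enumerate(run_charts(3, "INTRO",
                                     ["I1", "I2", "I3", "I4"],
                                     ["C1", "C2", "C3"],
                                     ["R1", "R2", "R3"]), start=1):
    print(f"Chart {n}: {'  '.join(chart)}")
```

With three comparison and three relevant questions, three charts are enough for every comparison question to precede every relevant question exactly once.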
Symptomatic questions don't matter. Outside-issue questions don't matter. "Fear/hope" questions don't matter. Whether one uses inclusive or exclusive comparison questions doesn't matter. In fact, nothing else matters other than a good, highly organized, well-presented, thorough pre-test interview and clear, unambiguous relevant questions that are closely on target.
The elegance of this type of test is that it would make us much more standardized. It would increase accuracy and allow for better human and computer diagnosis and scoring.
With this said, why do we, as a profession, cling to formats containing so much superfluous, invalidated fluff? As examiners, we know that the examinee's attention span and time in the chair are a precious commodity that should not be squandered on questions with no diagnostic value. We have all seen response capability diminish as the test goes on.